
    On the Meaning of AI Safety

    In this paper, I propose a definition of AI safety. I then explore the fundamental concepts that underlie this definition. My aim is to contribute to a constructive discussion and further the discourse on AI safety.

    Towards Measurement of Confidence in Safety Cases

    Arguments in safety cases are predominantly qualitative. This is partly attributed to the lack of sufficient design and operational data necessary to measure the achievement of high-dependability targets, particularly for safety-critical functions implemented in software. The subjective nature of many forms of evidence, such as expert judgment and process maturity, also contributes to the overwhelming dependence on qualitative arguments. However, where data for quantitative measurement is systematically collected, quantitative arguments offer far greater benefits than qualitative arguments in assessing confidence in the safety case. In this paper, we propose a basis for developing and evaluating integrated qualitative and quantitative safety arguments based on the Goal Structuring Notation (GSN) and Bayesian Networks (BNs). The approach we propose identifies structures within GSN-based arguments where uncertainties can be quantified; BNs then provide a means to reason about confidence probabilistically. We illustrate our approach using a fragment of a safety case for an unmanned aerial system and conclude with some preliminary observations.
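
    The following is a minimal, self-contained sketch of the kind of quantification the paper describes: confidence values attached to the sub-goals of a GSN fragment are combined through a small Bayesian-network conditional probability table to yield an overall confidence in the top-level goal. The goal names, probabilities, and table entries are hypothetical, not taken from the paper.

    ```python
    from itertools import product

    # Illustrative Bayesian-network fragment for a GSN argument: top goal G1
    # ("UAS is acceptably safe") is supported by sub-goals G2 and G3, whose
    # confidence is derived from the strength of their supporting evidence.
    # All names and probabilities below are hypothetical.

    p_g2 = 0.90   # P(G2 holds), e.g. elicited from expert judgment
    p_g3 = 0.85   # P(G3 holds)

    # Conditional probability table for the top goal given its sub-goals,
    # capturing how strongly each combination of sub-goals supports G1.
    p_g1_given = {
        (True,  True):  0.98,
        (True,  False): 0.40,
        (False, True):  0.30,
        (False, False): 0.05,
    }

    # Marginalise over the sub-goals to obtain the overall confidence in G1.
    p_g1 = 0.0
    for g2, g3 in product([True, False], repeat=2):
        weight = (p_g2 if g2 else 1 - p_g2) * (p_g3 if g3 else 1 - p_g3)
        p_g1 += weight * p_g1_given[(g2, g3)]

    print(f"Confidence in top goal G1: {p_g1:.3f}")
    ```

    A fuller treatment would build the network directly from the GSN argument structure; this sketch only shows how quantified uncertainties propagate to a top-level confidence measure.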

    Artificial intelligence in health care: accountability and safety

    The prospect of patient harm caused by the decisions made by an artificial intelligence-based clinical tool is something to which current practices of accountability and safety worldwide have not yet adjusted. We focus on two aspects of clinical artificial intelligence used for decision-making: moral accountability for harm to patients; and safety assurance to protect patients against such harm. Artificial intelligence-based tools are challenging the standard clinical practices of assigning blame and assuring safety. Human clinicians and safety engineers have weaker control over the decisions reached by artificial intelligence systems and less knowledge and understanding of precisely how the artificial intelligence systems reach their decisions. We illustrate this analysis by applying it to an example of an artificial intelligence-based system developed for use in the treatment of sepsis. The paper ends with practical suggestions for ways forward to mitigate these concerns. We argue for a need to include artificial intelligence developers and systems safety engineers in our assessments of moral accountability for patient harm. Meanwhile, none of the actors in the model robustly fulfil the traditional conditions of moral accountability for the decisions of an artificial intelligence system. We should therefore update our conceptions of moral accountability in this context. We also need to move from a static to a dynamic model of assurance, accepting that considerations of safety are not fully resolvable during the design of the artificial intelligence system before the system has been deployed.

    Dynamic Safety Cases for Through-Life Safety Assurance

    We describe dynamic safety cases, a novel operationalization of the concept of through-life safety assurance, whose goal is to enable proactive safety management. Using an example from the aviation systems domain, we motivate our approach, its underlying principles, and a lifecycle. We then identify the key elements required to move towards a formalization of the associated framework.

    Integrating Safety Assessment into the Design of Healthcare Service-Oriented Architectures

    Most healthcare organisations are service-oriented, fundamentally centred on critical services provided by medical and nursing staff. Increasingly, these human-centric services rely on software-intensive systems, i.e. medical devices and health informatics, for improving different aspects of healthcare, e.g. enhancing efficiency through automation and patient safety through smart alarm systems. However, many healthcare services are categorised as high risk, and as such it is vital to analyse the ways in which software-based systems can contribute to unintentional harm and potentially compromise patient safety. This paper proposes an approach to modelling and analysing Service-Oriented Architectures (SOAs) used in healthcare, with emphasis on identifying and classifying potentially hazardous behaviour. The paper also considers how the safety case for these SOAs can be developed in a modular manner. The approach is illustrated through a case study based on three services: ambulance, electronic health records and childbirth services.

    Safe Reinforcement Learning for Sepsis Treatment

    Sepsis, a life-threatening illness, is estimated to be the primary cause of death for 50,000 people a year in the UK and many more worldwide. Managing the treatment of sepsis is very challenging: it is frequently missed at an early stage, and the optimal treatment is not yet clear. There are promising attempts to use Reinforcement Learning (RL) to learn the optimal strategy for treating sepsis patients, especially for the administration of intravenous fluids and vasopressors. However, RL agents take only the current state of the patient into account when recommending the vasopressor dosage. This is inconsistent with current clinical safety practice, in which the vasopressor dosage is increased or decreased gradually. A sudden major change in dosage might cause significant harm to patients and is therefore considered unsafe in sepsis treatment. In this paper, we adapt a previously published deep RL method and evaluate whether it exhibits this kind of sudden major change when recommending the vasopressor dosage. We then modify the method to address this safety constraint and learn a safer policy by incorporating current clinical knowledge and practice.
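
    Below is a minimal sketch of the kind of safety constraint the abstract describes: at each decision point, the recommended vasopressor dose is restricted to within one discretised step of the previous dose. The dose bins, Q-values, and function names are hypothetical illustrations, not the paper's actual method or data.

    ```python
    import numpy as np

    # Illustrative constraint: the agent may move the vasopressor dose by at
    # most one discretised bin per decision point, mirroring the clinical
    # practice of gradual dose adjustment. All values are hypothetical.

    VASO_BINS = np.array([0.0, 0.05, 0.1, 0.2, 0.4, 0.8])  # dose levels, mcg/kg/min
    MAX_STEP = 1  # allow at most one bin of change per decision

    def safe_action(q_values, prev_dose_idx):
        """Pick the highest-value dose among those within MAX_STEP bins of
        the previously administered dose."""
        allowed = [a for a in range(len(VASO_BINS))
                   if abs(a - prev_dose_idx) <= MAX_STEP]
        return max(allowed, key=lambda a: q_values[a])

    # Usage with a hypothetical Q-value vector for the current patient state:
    q_values = np.array([0.1, 0.3, 0.9, 0.2, 0.6, 0.4])
    prev_dose_idx = 0                        # patient currently on no vasopressor
    action = safe_action(q_values, prev_dose_idx)
    print("Recommended dose:", VASO_BINS[action], "mcg/kg/min")
    ```

    Without the constraint, the unconstrained greedy choice here would jump two bins in a single step; the constrained choice moves only one bin, which is the gradual-adjustment behaviour the paper aims to enforce.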

    Enhancing Covid-19 Decision-Making by Creating an Assurance Case for Simulation Models

    Simulation models have been informing the COVID-19 policy-making process. These models therefore have a significant influence on the risk of societal harms. But how clearly are the underlying modelling assumptions and limitations communicated so that decision-makers can readily understand them? When making claims about risk in safety-critical systems, it is common practice to produce an assurance case: a structured argument, supported by evidence, whose aim is to assess how confident we should be in our risk-based decisions. We argue that any COVID-19 simulation model used to guide critical policy decisions would benefit from being supported by such a case, to explain how, and to what extent, the evidence from the simulation can be relied on to substantiate policy conclusions. This would enable a critical review of the implicit assumptions and inherent uncertainty in modelling, and would give the overall decision-making process greater transparency and accountability.

    Supporting the automated generation of modular product line safety cases

    The effective reuse of design assets in safety-critical Software Product Lines (SPLs) requires the reuse of the safety analyses of those assets in the variant contexts in which products derived from the SPL are certified. This in turn requires the traceability of SPL variation across the design, including variation in safety analysis and safety cases. In this paper, we propose a method and tool to support the automatic generation of modular SPL safety case architectures from the information provided by SPL feature modeling and model-based safety analysis. The method is implemented as automated tool support using the Goal Structuring Notation (GSN) and its modular extensions, as supported by the D-Case Editor. The tool was used to generate a modular safety case for an automotive Hybrid Braking System SPL.
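
    As a rough illustration of the generation step described above, the sketch below derives a modular GSN safety-case skeleton from a product's feature selection. The feature model, product configuration, and goal templates are hypothetical; the actual method also draws on model-based safety analysis results and the D-Case Editor tooling.

    ```python
    # Hypothetical feature model for an SPL; the real input would come from
    # SPL feature modeling and model-based safety analysis, not hard-coded data.
    FEATURE_MODEL = {"HybridBrakingSystem": {"RegenerativeBraking", "FrictionBraking", "ABS"}}

    def generate_safety_case_modules(spl, selected_features):
        """Return one GSN module skeleton per feature selected for the product."""
        valid = FEATURE_MODEL[spl]
        modules = {}
        for feature in selected_features:
            if feature not in valid:
                raise ValueError(f"{feature} is not a feature of {spl}")
            modules[f"Module_{feature}"] = {
                "top_goal": f"G_{feature}: Hazards arising from {feature} are acceptably mitigated",
                "context": f"C_{feature}: {feature} as configured in this product variant",
                "away_goal": f"AG_{feature}: Supported by the {spl} platform safety case",
            }
        return modules

    # Usage: a product variant that selects two of the optional features.
    for name, module in generate_safety_case_modules(
            "HybridBrakingSystem", ["RegenerativeBraking", "ABS"]).items():
        print(name, "->", module["top_goal"])
    ```

    The point of the modular structure is that each feature's argument can be reused across product variants, with variant-specific context and away goals regenerated per configuration.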